1.
Anesth Analg ; 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38416597

ABSTRACT

BACKGROUND: Perioperative red blood cell (RBC) transfusions increase venous thromboembolic (VTE) events. Although a previous study found that plasma resuscitation after trauma was associated with increased VTE, the risk associated with additional perioperative plasma is unknown. METHODS: A US claims and EHR database (TriNetX Diamond Network) was queried. We compared surgical patients who received perioperative plasma and RBC to patients who received perioperative RBC but not plasma. Subanalyses included (1) all surgeries (n = 48,580) and (2) cardiovascular surgeries (n = 38,918). Propensity score matching was performed for age at surgery, ethnicity, race, sex, overweight and obesity, type 2 diabetes, disorders of lipoprotein metabolism, essential hypertension, neoplasms, nicotine dependence, coagulopathies, sepsis, chronic kidney disease, liver disease, nonsteroidal anti-inflammatory analgesics, platelet aggregation inhibitors, anticoagulants, hemoglobin level, outpatient service utilization, and inpatient services; surgery type was included for "all surgeries" analyses. Outcomes included 30-day mortality, postoperative VTE, pulmonary embolism (PE), and disseminated intravascular coagulation (DIC). RESULTS: After matching the surgical cohorts, compared to only RBC, plasma + RBC was associated with higher risk of postoperative mortality (4.52% vs 3.32%, risk ratio [RR]: 1.36 [95% confidence interval, 1.24-1.49]), VTE (3.92% vs 2.70%, RR: 1.36 [1.24-1.49]), PE (1.94% vs 1.33%, RR: 1.46 [1.26-1.68]), and DIC (0.96% vs 0.35%, RR: 2.75 [2.15-3.53]). Among perioperative cardiovascular patients, adding plasma to RBC transfusion was associated with similar increased risk. CONCLUSIONS: When compared with perioperative RBC transfusion, adding plasma was associated with increased 30-day postoperative mortality, VTE, PE, and DIC risk among surgical and cardiovascular surgical patients. 
Reducing unnecessary plasma transfusion should be a focus of patient blood management to improve overall value in health care.

2.
Anesthesiol Clin ; 41(1): 79-102, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36872008

ABSTRACT

Mechanical circulatory support (MCS) devices provide temporary or intermediate- to long-term support for acute cardiopulmonary failure. Over the last 20 to 30 years, MCS device usage has grown tremendously. These devices offer support for isolated respiratory failure, isolated cardiac failure, or both. Initiation of MCS devices requires input from multidisciplinary teams using patient factors and institutional resources to guide decision making, along with a planned "exit strategy" for bridge to decision, bridge to transplant, bridge to recovery, or as destination therapy. Important considerations for MCS use include patient selection, cannulation/insertion strategies, and the complications of each device.


Subject(s)
Cognition; Heart Failure; Humans; Patient Selection
3.
Curr Opin Anaesthesiol ; 36(1): 57-60, 2023 Feb 01.
Article in English | MEDLINE | ID: mdl-36550605

ABSTRACT

PURPOSE OF REVIEW: The development of advanced and minimally invasive surgical procedures is providing treatment opportunities to older and higher-risk patients. This has also led to highly specialized physicians and a need for better communication and planning with patients and within the care team. RECENT FINDINGS: In the field of cardiac surgery, the heart team model has been advocated and implemented as a vehicle to optimize decision making prior to the procedure, care during the procedure, and the recovery process. The goal is to provide a treatment path that prioritizes the patient's goals and to anticipate and minimize complications. SUMMARY: In this review, we discuss the concepts of shared decision making (SDM) and implementation science in the context of the complex cardiac patient. We also review the most recent evidence for their use in cardiac surgery. We argue that a team model not only bridges knowledge gaps but also provides a multidisciplinary environment for the practice of SDM and the implementation of evidence-based practices. We believe this will provide patients with a better experience as they navigate their care and improve their medical outcomes as well.


Subject(s)
Decision Making, Shared; Thoracic Surgery; Humans; Decision Making
4.
J Clin Anesth ; 85: 111040, 2023 05.
Article in English | MEDLINE | ID: mdl-36549035

ABSTRACT

BACKGROUND: Immediate postoperative extubation (IPE) can reduce perioperative complications and length of stay (LOS); however, it is performed variably after liver transplant across institutions and has historically excluded high-risk recipients from consideration. In late 2012, we planned and implemented a structured quality improvement (QI) initiative at a single academic institution to standardize perioperative care of liver transplant recipients without exceptions. We hypothesized that such an approach would lead to a sustained increase in IPE after primary (PAC) and delayed abdominal closure (DAC). METHODS: We retrospectively studied 591 patients from 2013 to 2018 who underwent liver transplant after initiative implementation. We evaluated trends in the incidence of IPE versus delayed extubation (DE), as well as reintubation, LOS, and mortality. RESULTS: Overall, 476/591 (80.5%) recipients underwent PAC (278 IPE, 198 DE) and 115/591 (19.5%) experienced DAC (39 IPE, 76 DE). When comparing data from 2013 to data from 2018, the incidence of IPE increased from 9/67 (13.4%) to 78/90 (86.7%) after PAC and from 1/12 (8.3%) to 16/23 (69.6%) after DAC. For the same years, the incidence of IPE after PAC for recipients with MELD scores ≥30 increased from 0/19 (0%) to 12/17 (70.6%), for recipients who underwent simultaneous liver-kidney transplant increased from 1/8 (12.5%) to 4/5 (80.0%), and for recipients who received massive transfusion (>10 units of packed red blood cells) increased from 0/17 (0%) to 10/13 (76.9%). Reintubation for respiratory considerations <48 h after IPE occurred in 3/278 (1.1%) after PAC and 1/39 (2.6%) after DAC. IPE was associated with decreased intensive care unit LOS (HR of discharge: 1.92; 95% CI: 1.58, 2.33; P < 0.001) and hospital LOS (HR of discharge: 1.45; 95% CI: 1.20, 1.76; P < 0.001) but demonstrated no association with mortality.
CONCLUSION: A structured QI initiative led to sustained high rates of IPE and reduced LOS in all liver transplant recipients, including those classified as high risk.


Subject(s)
Liver Transplantation; Humans; Liver Transplantation/adverse effects; Retrospective Studies; Airway Extubation/adverse effects; Liver; Postoperative Period; Length of Stay
5.
Exp Clin Transplant ; 20(9): 817-825, 2022 09.
Article in English | MEDLINE | ID: mdl-36169104

ABSTRACT

OBJECTIVES: Recombinant human activated factor VIIa has been used prophylactically to mitigate transfusion requirements in liver transplant. We explored its effectiveness and risks among liver transplant recipients at high risk for massive transfusion. MATERIALS AND METHODS: We performed a retrospective study of recipients who underwent liver transplant from 2012 to 2015. Patients considered at risk for massive transfusion received up to two 20 µg/kg doses of recombinant human activated factor VIIa, with rescue use permitted for other patients. We used propensity matching to determine the average treatment effects on patients who received recombinant human activated factor VIIa prophylactically to prevent massive transfusion. We determined thromboembolic events from medical record review. RESULTS: Of 234 liver transplant recipients, 38 received prophylactic and 2 received rescue recombinant human activated factor VIIa. We used a prediction model to readily identify those who would receive prophylactic recombinant human activated factor VIIa (C statistic = 0.885; 95% CI, 0.835-0.935). Propensity matching achieved balance, particularly for massive transfusion. Twenty-three of 38 patients (60.5%) who received recombinant human activated factor VIIa and 47 of 76 matched controls (61.8%) experienced massive transfusion. The coefficient for the average treatment effect of prophylactic administration was -0.013 (95% CI, -0.260 to 0.233; P = .92). The cohorts exhibited no difference in the number of thromboembolic events (P > .99), although fatal events occurred in 1 patient who had prophylactic and 1 patient who had rescue recombinant human activated factor VIIa. 
The lack of clinical benefit and the potential for fatal thromboembolic events observed with recombinant human activated factor VIIa precluded its prophylactic use in liver transplant recipients.


Subject(s)
Factor VIIa; Liver Transplantation; Factor VIIa/adverse effects; Humans; Liver Transplantation/adverse effects; Recombinant Proteins/adverse effects; Retrospective Studies; Treatment Outcome
6.
Anesth Analg ; 135(3): 567-575, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35426835

ABSTRACT

BACKGROUND: Patients presenting with acute coronary syndrome are administered a P2Y12 inhibitor and aspirin before coronary catheterization to prevent further myocardial injury from thrombosis. Guidelines recommend a standard waiting period between the time patients are administered dual antiplatelet therapy (DAPT) and elective cardiac surgery. Since 25% to 30% of the population may be considered nonresponders to clopidogrel, platelet function testing can be utilized for timing of surgery and to assess bleeding risks. The extent to which a standard waiting period or platelet function testing is used across centers is not established, representing an important opportunity to standardize practice. METHODS: We conducted a retrospective cohort study from 2011 to 2020 using data from the Maryland Cardiac Surgical Quality Initiative, a consortium of all 10 hospitals in the state performing cardiac surgery. The proportion of patients administered DAPT within 5 days of surgery was examined by hospital over the time period. Mixed-effects multivariable logistic regressions were used to examine the association of preoperative DAPT with ischemic and bleeding outcomes. Centers were surveyed on use or nonuse of preoperative platelet function testing, and bleeding outcomes were compared. RESULTS: There was significant heterogeneity of preoperative DAPT usage across centers, ranging from 2% to 54% (P < .001). DAPT within 5 days of isolated coronary artery bypass grafting (CABG) was associated with higher odds of reoperation for bleeding (odds ratio [OR], 1.55; 95% confidence interval [CI], 1.19-2.01; P = .001), >2 units of red blood cells (RBCs) transfused (OR, 1.62; 95% CI, 1.44-1.81; P < .001), and >2 units of non-RBCs transfused (OR, 1.79; 95% CI, 1.60-2.00; P < .001). 
In the 5 hospitals using preoperative platelet function testing to guide timing of surgery, there were greater odds of DAPT within 5 days (OR, 1.33; 95% CI, 1.22-1.45; P < .001) and, among DAPT patients, fewer transfusions of >2 units of RBCs (22% vs 33%; P < .001) and >2 units of non-RBCs (17% vs 28%; P < .001). CONCLUSIONS: There is significant variability in DAPT usage within 5 days of CABG between hospital centers. Preoperative platelet function testing may allow for earlier timing of surgery for those on DAPT without increased bleeding risks.


Subject(s)
Coronary Artery Bypass; Platelet Aggregation Inhibitors; Clopidogrel/adverse effects; Coronary Artery Bypass/adverse effects; Drug Therapy, Combination; Humans; Maryland/epidemiology; Platelet Aggregation Inhibitors/therapeutic use; Retrospective Studies; Treatment Outcome
7.
Semin Cardiothorac Vasc Anesth ; 26(3): 173-178, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35130773

ABSTRACT

The medical community is increasingly aware of the need for high-quality and high-value patient care. Anesthesiologists in particular have long demonstrated leadership in the field of quality and safety. Cardiothoracic anesthesiologists can improve the quality of care delivered to cardiac patients both with anesthesia-specific practices and in a team-based approach with other perioperative care providers. Collecting large volumes of multicentered data to study, measure, and improve anesthesia care is one of the many commitments of cardiothoracic anesthesiologists to this cause. This article reviews this and other aspects of the work of cardiothoracic anesthesiologists to improve value-added care to cardiac patients.


Subject(s)
Anesthesia; Anesthesiology; Cardiac Surgical Procedures; Anesthesiologists; Humans; Perioperative Care
8.
JMIRx Med ; 2(3): e24645, 2021 Jul 12.
Article in English | MEDLINE | ID: mdl-37725551

ABSTRACT

BACKGROUND: The modified early warning score (MEWS) is an objective measure of illness severity that promotes early recognition of clinical deterioration in critically ill patients. Its primary use is to facilitate faster intervention or an increase in the level of care. Despite its adoption in some African countries, MEWS is not standard of care in Ghana. To facilitate the use of such a tool, we assessed whether MEWS, or a combination of the more limited data that are routinely collected in current clinical practice, can be used to predict mortality among critically ill inpatients at the Korle-Bu Teaching Hospital in Accra, Ghana. OBJECTIVE: The aim of this study was to identify the predictive ability of MEWS for medical inpatients at risk of mortality and its comparability to a measure combining routinely measured physiologic parameters (limited MEWS [LMEWS]). METHODS: We conducted a retrospective study of medical inpatients aged ≥13 years admitted to the Korle-Bu Teaching Hospital from January 2017 to March 2019. Routine vital signs at 48 hours post admission were coded to obtain LMEWS values. The level of consciousness was imputed from medical records and combined with LMEWS to obtain the full MEWS value. A predictive model comparing mortality among patients with a significant MEWS or LMEWS value (≥4) versus a nonsignificant value (<4) was designed using multiple logistic regression and internally validated for predictive accuracy using the receiver operating characteristic (ROC) curve. RESULTS: A total of 112 patients were included in the study. The adjusted odds of death comparing patients with a significant MEWS to patients with a nonsignificant MEWS was 6.33 (95% CI 1.96-20.48). Similarly, the adjusted odds of death comparing patients with a significant versus nonsignificant LMEWS value was 8.22 (95% CI 2.45-27.56). The ROC curves for these analyses had C statistics of 0.83 and 0.84, respectively. 
CONCLUSIONS: LMEWS is a good predictor of mortality and comparable to MEWS. LMEWS can be adopted now, using currently available data, to identify medical inpatients at risk of death and improve their care.
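The abstract dichotomizes MEWS and LMEWS at a score of ≥4 but does not enumerate the physiologic scoring bands. The sketch below uses one widely cited MEWS band scheme as an illustration; the bands and all names are assumptions, not the study's exact instrument. Omitting the consciousness term gives the "limited" LMEWS built purely from routine vital signs:

```python
def mews(sbp, hr, rr, temp_c, avpu=None):
    """Illustrative MEWS using one commonly cited band scheme (an assumption,
    not the study's tool). Pass avpu=None to get the limited LMEWS, which
    drops the level-of-consciousness term."""
    score = 0
    # Systolic blood pressure (mmHg)
    if sbp <= 70: score += 3
    elif sbp <= 80: score += 2
    elif sbp <= 100: score += 1
    elif sbp >= 200: score += 2
    # Heart rate (beats/min)
    if hr < 40: score += 2
    elif hr <= 50: score += 1
    elif hr <= 100: pass
    elif hr <= 110: score += 1
    elif hr < 130: score += 2
    else: score += 3
    # Respiratory rate (breaths/min)
    if rr < 9: score += 2
    elif rr <= 14: pass
    elif rr <= 20: score += 1
    elif rr <= 29: score += 2
    else: score += 3
    # Temperature (degrees C)
    if temp_c < 35 or temp_c >= 38.5: score += 2
    # Level of consciousness (AVPU scale); omitted for LMEWS
    if avpu is not None:
        score += {"alert": 0, "voice": 1, "pain": 2, "unresponsive": 3}[avpu]
    return score

def is_significant(score):
    # The study dichotomized scores at >= 4 (significant vs nonsignificant)
    return score >= 4
```

For example, normal vitals (SBP 120, HR 80, RR 12, 37.0 °C, alert) score 0, while a hypotensive, tachycardic, tachypneic, hypothermic patient responsive only to pain crosses the ≥4 threshold.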

9.
Transfusion ; 60(11): 2565-2580, 2020 11.
Article in English | MEDLINE | ID: mdl-32920876

ABSTRACT

BACKGROUND: Intraoperative massive transfusion (MT) is common during liver transplantation (LT). A predictive model of MT has the potential to improve use of blood bank resources. STUDY DESIGN AND METHODS: Development and validation cohorts were identified among deceased-donor LT recipients from 2010 to 2016. A multivariable model of MT generated from the development cohort was validated with the validation cohort and refined using both cohorts. The combined cohort also validated the previously reported McCluskey risk index (McRI). A simple modified risk index (ModRI) was then created from the combined cohort. Finally, a method to translate model predictions into a population-specific blood allocation strategy was described and demonstrated for the study population. RESULTS: Of the 403 patients, 60 (29.6%) in the development and 51 (25.5%) in the validation cohort met the definition for MT. The ModRI, derived from the variables incorporated into the multivariable model, ranged from 0 to 5; 1 point each was assigned for hemoglobin level of less than 10 g/dL, platelet count of less than 100 × 10⁹/L, thromboelastography R interval of more than 6 minutes, simultaneous liver and kidney transplant, and retransplantation, and a ModRI of more than 2 defined recipients at risk for MT. The multivariable model, McRI, and ModRI demonstrated good discrimination (c statistic [95% CI]: 0.77 [0.70-0.84], 0.69 [0.62-0.76], and 0.72 [0.65-0.79], respectively, after correction for optimism). For blood allocation of 6 or 15 units of red blood cells (RBCs) based on risk of MT, the ModRI would prevent unnecessary crossmatching of 300 units of RBCs/100 transplants. CONCLUSIONS: Risk indices of MT in LT can be effective for risk stratification and reducing unnecessary blood bank resource utilization.
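The five ModRI criteria and the >2 cutoff are stated explicitly in the abstract; the sketch below transcribes them directly (the function name, argument names, and unit conventions are mine, not the study's):

```python
def modri(hgb_g_dl, platelets_e9_l, teg_r_min, slk_transplant, retransplant):
    """Modified risk index (ModRI) for massive transfusion, per the abstract:
    one point per criterion; a total greater than 2 flags an at-risk recipient.
    Returns (score, at_risk)."""
    score = (
        (hgb_g_dl < 10)            # hemoglobin < 10 g/dL
        + (platelets_e9_l < 100)   # platelet count < 100 x 10^9/L
        + (teg_r_min > 6)          # thromboelastography R interval > 6 min
        + bool(slk_transplant)     # simultaneous liver-kidney transplant
        + bool(retransplant)       # retransplantation
    )
    return score, score > 2
```

For instance, a primary-transplant candidate with hemoglobin 9 g/dL, platelets 80 × 10⁹/L, and a TEG R interval of 7 minutes scores 3 and would be classified as at risk for MT.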


Subject(s)
Blood Banks; Blood Transfusion; Intraoperative Care; Liver Transplantation; Models, Biological; Cohort Studies; Female; Humans; Male; Middle Aged; Risk Factors
10.
Infect Control Hosp Epidemiol ; 40(3): 287-300, 2019 03.
Article in English | MEDLINE | ID: mdl-30786946

ABSTRACT

BACKGROUND: Surgical site infections (SSIs) portend high patient morbidity and mortality. Although evidence-based clinical interventions can reduce SSIs, they are not reliably delivered in practice, and data are limited on the best approach to improve adherence. OBJECTIVE: To summarize implementation strategies aimed at improving adherence to evidence-based interventions that reduce SSIs. DESIGN: Systematic review. METHODS: We searched PubMed, Embase, CINAHL, the Cochrane Library, the WHO Regional databases, AFROLIB, and Africa-Wide for studies published between January 1990 and December 2015. The Effective Practice and Organisation of Care (EPOC) criteria were used to identify acceptable-quality study designs. We used structured forms to extract data on implementation strategies and grouped them into an implementation model called the "Four Es" framework (ie, engage, educate, execute, and evaluate). RESULTS: In total, 125 studies met our inclusion criteria, but only 8 studies met the EPOC criteria, which limited our ability to identify best practices. Most studies used multifaceted strategies to improve adherence to evidence-based interventions. Engagement strategies included multidisciplinary work and strong leadership involvement. Education strategies included various approaches to introduce evidence-based practices to clinicians and patients. Execution strategies standardized the interventions into simple tasks to facilitate uptake. Evaluation strategies assessed adherence to evidence-based interventions and patient outcomes, providing performance feedback to providers. CONCLUSIONS: Multifaceted implementation strategies represent the most common approach to facilitating the adoption of evidence-based practices. We believe that this summary of implementation strategies complements existing clinical guidelines and may accelerate efforts to reduce SSIs.


Subject(s)
Surgical Wound Infection/prevention & control; Evidence-Based Practice/statistics & numerical data; Hospitals/statistics & numerical data; Humans; Practice Guidelines as Topic
11.
J Natl Med Assoc ; 110(4): 407-413, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30126569

ABSTRACT

INTRODUCTION: Little is known about the state of resuscitation services in low- and middle-income countries (LMICs), including Nigeria, Africa's most populous country. We sought to assess cardiopulmonary resuscitation (CPR) care in referral hospitals across Nigeria to better inform capacity-building initiatives. METHODS: We designed a survey to evaluate infrastructure, equipment, personnel, training, and clinical management, as no standardized instrument for assessing resuscitation in LMICs was available. We included referral teaching hospitals with a functioning intensive care unit (ICU) and a department of anaesthesiology. We pilot-tested our tool at four hospitals in Nigeria and recruited participants electronically via the Nigerian Society of Anaesthetists directory. RESULTS: Our survey included 17 hospitals (82% public, 12% private, 6% public-private partnership), although not every hospital answered every question. We found that 20% (3 out of 15) of hospitals had a cardiac arrest response team system, 21% (3/14) documented CPR events, and 21% (3/14) reviewed such events for education and quality improvement. Most basic supplies were sufficient in the ICU (100% [15/15] availability of defibrillators, 94% [16/17] of adrenaline) but were less available in other departments. While 67% [10/15] of hospitals had a resuscitation training program, only 27% [4/15] had at least half their physicians trained in basic life support. CONCLUSION: In this first large-scale assessment of resuscitation care in Nigeria, we found progress in training centre development and supply availability but a paucity of cardiac arrest response team systems. Our data indicate a need for improved capacity development, especially in documentation and continuous quality improvement, both of which are low-cost solutions.


Subject(s)
Cardiopulmonary Resuscitation/statistics & numerical data; Hospitals, Teaching/statistics & numerical data; Capacity Building; Cardiopulmonary Resuscitation/education; Equipment and Supplies, Hospital/statistics & numerical data; Female; Global Health; Health Care Surveys; Hospital Design and Construction; Hospitals, Teaching/organization & administration; Humans; Intensive Care Units; Male; Nigeria; Referral and Consultation
12.
Biomarkers ; 23(1): 61-69, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29034718

ABSTRACT

OBJECTIVES AND METHODS: The furosemide stress test (FST) is a novel dynamic assessment of tubular function that has been shown in preliminary studies to identify patients who will progress to advanced-stage acute kidney injury, including those who receive renal replacement therapy (RRT). The aim of this study was to investigate whether the urinary response to a single intraoperative dose of intravenous furosemide predicts delayed graft function (DGF) in patients undergoing deceased-donor kidney transplant. Recipient urinary output was measured at 2 and 6 h post furosemide administration. RESULTS: In an adjusted multiple logistic regression, a single 100 mg dose of intraoperative furosemide given after the anastomosis of the renal vessels (FST) predicted the need for RRT at 2 and 6 h post kidney transplantation (KT). In receiver operating characteristic (ROC) analysis, the FST predicted DGF with an area under the curve of 0.85 at an optimal urinary output cutoff of <600 mL at 6 h, with a sensitivity of 83% and a specificity of 74%. CONCLUSIONS: The FST is a predictor of DGF post kidney transplant and has the potential to identify patients requiring RRT early after KT.
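A minimal sketch of the FST readout described above, together with the likelihood ratios implied by the reported operating point. The 600 mL/6 h cutoff and the accuracy figures are the study's; the function names and the likelihood-ratio derivation are mine:

```python
def fst_predicts_dgf(urine_output_ml_6h, cutoff_ml=600):
    """FST readout per the abstract: after a single intraoperative 100 mg IV
    furosemide dose, a 6-hour urine output below 600 mL predicted delayed
    graft function (AUC 0.85, sensitivity 83%, specificity 74%)."""
    return urine_output_ml_6h < cutoff_ml

def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios implied by the reported
    operating point (a derived quantity, not reported in the abstract)."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg
```

At the reported sensitivity and specificity, a positive FST (output < 600 mL) carries a positive likelihood ratio of roughly 3.2 and a negative test a likelihood ratio of roughly 0.23, a moderately informative bedside signal.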


Subject(s)
Delayed Graft Function/diagnosis; Furosemide/administration & dosage; Kidney Transplantation/methods; Tissue Donors; Adult; Delayed Graft Function/physiopathology; Diuretics/administration & dosage; Female; Humans; Logistic Models; Male; Middle Aged; Predictive Value of Tests; Prognosis; ROC Curve; Retrospective Studies
13.
Anesth Analg ; 124(5): 1644-1652, 2017 05.
Article in English | MEDLINE | ID: mdl-28426586

ABSTRACT

BACKGROUND: Patients undergoing liver transplantation frequently but inconsistently require massive blood transfusion. The ability to predict massive transfusion (MT) could reduce the impact on blood bank resources through customization of the blood order schedule. Current predictive models of MT for blood product utilization during liver transplantation are not generally applicable to individual institutions owing to variability in patient population, intraoperative management, and definitions of MT. Moreover, existing models may be limited by not incorporating cirrhosis stage or thromboelastography (TEG) parameters. METHODS: This retrospective cohort study included all patients who underwent deceased-donor liver transplantation at the Johns Hopkins Hospital between 2010 and 2014. We defined MT as intraoperative transfusion of > 10 units of packed red blood cells (pRBCs) and developed a multivariable predictive model of MT that incorporated cirrhosis stage and TEG parameters. The accuracy of the model was assessed with the goodness-of-fit test, receiver operating characteristic analysis, and bootstrap resampling. The distribution of correct patient classification was then determined as we varied the model threshold for classifying MT. Finally, the potential impact of these predictions on blood bank resources was examined. RESULTS: Two hundred three patients were included in the study. Sixty (29.6%) patients met the definition for MT and received a median (interquartile range) of 19.0 (14.0-27.0) pRBC units intraoperatively compared with 4.0 units (1.0-6.0) for those who did not satisfy the criterion for MT. The multivariable model for predicting MT included Model for End-stage Liver Disease score, whether simultaneous liver and kidney transplant was performed, cirrhosis stage, hemoglobin concentration, platelet concentration, and TEG R interval and angle. 
This model demonstrated good calibration (Hosmer-Lemeshow goodness-of-fit test P = .45) and good discrimination (c statistic: 0.835; 95% confidence interval, 0.781-0.888). A probability cutoff threshold of 0.25 was found to misclassify only 4 of 100 patients as unlikely to experience MT, with the majority such misclassifications within 4 units of the working definition for MT. For this threshold, a preoperative blood ordering schedule that allocated 6 units of pRBCs for those unlikely to experience MT and 15 for those who were likely to experience MT would prevent unnecessary crossmatching of 338 units/100 transplants. CONCLUSIONS: When clinical and laboratory parameters are included, a model predicting intraoperative MT in patients undergoing liver transplantation is sufficiently accurate that its predictions could guide the blood order schedule for individual patients based on institutional data, thereby reducing the impact on blood bank resources. Ongoing evaluation of model accuracy and transfusion practices is required to ensure continuing performance of the predictive model.
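The blood ordering policy described above reduces to a one-line rule. The 0.25 probability cutoff and the 6- versus 15-unit allocations are from the abstract; the function shape and the assumption that the cutoff is inclusive are mine:

```python
def blood_order_prbc_units(p_massive_transfusion, threshold=0.25):
    """Preoperative blood order schedule sketched from the abstract:
    crossmatch 15 units of pRBCs when the model's predicted probability of
    massive transfusion reaches the 0.25 cutoff, otherwise 6 units.
    (Whether the cutoff is inclusive is an assumption.)"""
    return 15 if p_massive_transfusion >= threshold else 6
```

In the study population, applying this schedule rather than a uniform maximal order was estimated to avoid unnecessary crossmatching of 338 units per 100 transplants.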


Subject(s)
Blood Banks/statistics & numerical data; Blood Transfusion/methods; Liver Transplantation/methods; Algorithms; Cohort Studies; End Stage Liver Disease/blood; End Stage Liver Disease/surgery; Female; Humans; Intraoperative Care; Male; Middle Aged; Models, Statistical; Predictive Value of Tests; Retrospective Studies; Thrombelastography; Treatment Outcome
14.
Anesthesiology ; 125(4): 814-5, 2016 10.
Article in English | MEDLINE | ID: mdl-27649432
15.
Anesthesiology ; 124(3): 561-9, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26881395

ABSTRACT

BACKGROUND: Anesthesia is integral to improving surgical care in low-resource settings. Anesthesia providers who work in these areas should be familiar with the particularities associated with providing care in these settings, including the types and outcomes of commonly performed anesthetic procedures. METHODS: The authors conducted a retrospective analysis of anesthetic procedures performed at Médecins Sans Frontières facilities from July 2008 to June 2014. The authors collected data on patient demographics, procedural characteristics, and patient outcome. The factors associated with perioperative mortality were analyzed. RESULTS: Over the 6-yr period, 75,536 anesthetics were provided to adult patients. The most common anesthesia techniques were spinal anesthesia (45.56%) and general anesthesia without intubation (33.85%). Overall perioperative mortality was 0.25%. Emergent procedures (0.41%; adjusted odds ratio [AOR], 15.86; 95% CI, 2.14 to 115.58), specialized surgeries (2.74%; AOR, 3.82; 95% CI, 1.27 to 11.47), and surgical duration more than 6 h (9.76%; AOR, 4.02; 95% CI, 1.09 to 14.88) were associated with higher odds of mortality than elective surgeries, minor surgeries, and surgical duration less than 1 h, respectively. Compared with general anesthesia with intubation, spinal anesthesia, regional anesthesia, and general anesthesia without intubation were associated with lower perioperative mortality rates of 0.04% (AOR, 0.10; 95% CI, 0.05 to 0.18), 0.06% (AOR, 0.26; 95% CI, 0.08 to 0.92), and 0.14% (AOR, 0.29; 95% CI, 0.18 to 0.45), respectively. CONCLUSIONS: A wide range of anesthetics can be carried out safely in resource-limited settings. Providers need to be aware of the potential risks and the outcomes associated with anesthesia administration in these settings.


Subject(s)
Anesthesia/economics; Health Resources/economics; Medical Missions/economics; Patient Care/economics; Physicians/economics; Adolescent; Adult; Aged; Anesthesia/methods; Anesthesia/trends; Female; Health Resources/trends; Humans; Male; Medical Missions/trends; Middle Aged; Patient Care/methods; Patient Care/trends; Physicians/trends; Retrospective Studies; Time Factors; Young Adult